31 research outputs found

    Dictionary-based Tensor Canonical Polyadic Decomposition

    Full text link
    To ensure the interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
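    As a minimal illustration (not the paper's algorithm), one way to handle a dictionary constraint on a factor is to replace each estimated column by its most correlated dictionary atom after an unconstrained update; the function and variable names below are hypothetical.

```python
import numpy as np

def project_factor_onto_dictionary(factor, dictionary):
    """Replace each column of `factor` by its most correlated atom of `dictionary`.

    A minimal illustration of a dictionary constraint on one CPD factor:
    the factor is forced to be built from columns of a known dictionary.
    """
    # Normalize columns so we compare directions rather than scales.
    f = factor / (np.linalg.norm(factor, axis=0, keepdims=True) + 1e-12)
    d = dictionary / (np.linalg.norm(dictionary, axis=0, keepdims=True) + 1e-12)
    scores = d.T @ f                      # (n_atoms, rank) correlations
    chosen = np.argmax(np.abs(scores), axis=0)
    return dictionary[:, chosen], chosen

# Toy usage: a 50-atom dictionary and a noisy rank-3 factor estimate.
rng = np.random.default_rng(0)
D = rng.random((20, 50))                  # known dictionary (e.g. a spectral library)
A = D[:, [3, 10, 42]] + 0.05 * rng.standard_normal((20, 3))
A_proj, atoms = project_factor_onto_dictionary(A, D)
print(atoms)                              # indices of the selected atoms
```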

    Convolutive Block-Matching Segmentation Algorithm with Application to Music Structure Analysis

    Full text link
    Music Structure Analysis (MSA) consists of representing a song in sections (such as "chorus", "verse", "solo", etc.), and can be seen as retrieving a simplified organization of the song. This work presents a new algorithm devoted to MSA, called the Convolutive Block-Matching (CBM) algorithm. The CBM algorithm is a dynamic programming algorithm that operates on autosimilarity matrices, a standard tool in MSA. In this work, autosimilarity matrices are computed from the feature representation of an audio signal, with time sampled at the bar scale. We study three different similarity functions for the computation of autosimilarity matrices. We report that the proposed algorithm achieves performance competitive with that of supervised state-of-the-art methods on 3 of 4 metrics, while being fully unsupervised.
    Comment: 4 pages, 5 figures, 1 table. Submitted at ICASSP 2023. The associated toolbox is available at https://gitlab.inria.fr/amarmore/autosimilarity_segmentation
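    As a minimal sketch of the main data structure involved (the segmentation itself is implemented in the linked toolbox), the cosine autosimilarity matrix of bar-wise features can be computed as follows; shapes and names are illustrative.

```python
import numpy as np

def cosine_autosimilarity(features):
    """Cosine autosimilarity matrix of bar-wise features.

    `features` has shape (n_bars, n_features): one feature vector per bar.
    Entry (i, j) is the cosine similarity between bars i and j.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normalized = features / np.maximum(norms, 1e-12)
    return normalized @ normalized.T      # (n_bars, n_bars), values in [-1, 1]

# Toy usage: 128 bars described by 12-dimensional chroma-like vectors.
rng = np.random.default_rng(0)
bars = rng.random((128, 12))
S = cosine_autosimilarity(bars)
print(S.shape)                            # (128, 128)
```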

    A Homotopy-based Algorithm for Sparse Multiple Right-hand Sides Nonnegative Least Squares

    Full text link
    Nonnegative least squares (NNLS) problems arise in models that rely on additive linear combinations. In particular, they are at the core of nonnegative matrix factorization (NMF) algorithms. The nonnegativity constraint is known to naturally favor sparsity, that is, solutions with few non-zero entries. However, it is often useful to further enhance this sparsity, as it improves the interpretability of the results and helps reduce noise. While the ℓ0-"norm", equal to the number of non-zero entries in a vector, is a natural sparsity measure, its combinatorial nature makes it difficult to use in practical optimization schemes. Most existing approaches thus rely either on its convex surrogate, the ℓ1-norm, or on heuristics such as greedy algorithms. In the case of multiple right-hand sides NNLS (MNNLS), which are used within NMF algorithms, sparsity is often enforced column- or row-wise, and the fact that the solution is a matrix is not exploited. In this paper, we first introduce a novel formulation for sparse MNNLS, with a matrix-wise ℓ0 sparsity constraint. Then, we present a two-step algorithm to tackle this problem. The first step uses a homotopy algorithm to produce the whole regularization path for all the ℓ1-penalized NNLS problems arising in MNNLS, that is, to produce a set of solutions representing different trade-offs between reconstruction error and sparsity. The second step selects solutions among these paths in order to build a sparsity-constrained matrix that minimizes the reconstruction error. We illustrate the advantages of the proposed algorithm for the unmixing of facial and hyperspectral images.
    Comment: 20 pages + 7 pages supplementary material
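    As an illustration of the second step's idea only (not the paper's exact selection procedure), picking one point per column from the regularization paths under a shared matrix-wise ℓ0 budget can be cast as a multiple-choice knapsack and solved by dynamic programming; the data layout below is hypothetical.

```python
import numpy as np

def min_error_under_budget(paths, k):
    """Minimal summed reconstruction error when picking one candidate per column
    under a shared (matrix-wise) budget of at most k non-zero entries.

    `paths[j]` is a list of (error, n_nonzeros) pairs for column j, for instance
    points taken along that column's l1 regularization path. This is a
    multiple-choice knapsack solved by dynamic programming over the budget.
    """
    best = np.full(k + 1, np.inf)
    best[0] = 0.0
    for candidates in paths:
        new_best = np.full(k + 1, np.inf)
        for err, nnz in candidates:
            for budget in range(nnz, k + 1):
                new_best[budget] = min(new_best[budget], best[budget - nnz] + err)
        best = new_best
    return best.min()

# Toy usage: 3 columns, each with a few (error, n_nonzeros) trade-offs.
paths = [
    [(2.0, 0), (0.8, 1), (0.1, 3)],
    [(1.5, 0), (0.4, 2)],
    [(3.0, 0), (1.0, 1), (0.2, 2)],
]
print(min_error_under_budget(paths, 4))   # best total error with <= 4 non-zeros
```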

    A primer for resonant tunnelling

    Get PDF
    Resonant tunnelling is studied numerically and analytically with the help of a one-dimensional, time-independent, three-well quantum model. The simplest cases are considered, in which the three-well potential is polynomial or piecewise constant.
    Comment: accepted to EJP, 19 pages, 8 figures
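    As a minimal numerical sketch in the spirit of the piecewise-constant case (not necessarily the paper's exact setup), transmission through a piecewise-constant potential can be computed with standard 2x2 transfer matrices; the units and the example potential are illustrative.

```python
import numpy as np

HBAR = 1.0  # natural units: hbar = m = 1
MASS = 1.0

def transmission(E, potentials, boundaries):
    """Transmission probability through a piecewise-constant 1D potential.

    `potentials` lists the constant value of V in each region (the first and last
    regions extend to -/+ infinity), `boundaries` lists the interface positions,
    so len(potentials) == len(boundaries) + 1. Plane-wave coefficients are
    propagated across each interface by matching psi and psi'.
    """
    k = np.sqrt(2 * MASS * (E - np.asarray(potentials, dtype=complex))) / HBAR

    def interface(kj, x):
        # Maps coefficients (A, B) of A*exp(i k x) + B*exp(-i k x)
        # to the values (psi(x), psi'(x)).
        return np.array([[np.exp(1j * kj * x), np.exp(-1j * kj * x)],
                         [1j * kj * np.exp(1j * kj * x), -1j * kj * np.exp(-1j * kj * x)]])

    T = np.eye(2, dtype=complex)
    for j, x in enumerate(boundaries):
        T = np.linalg.solve(interface(k[j + 1], x), interface(k[j], x)) @ T

    # Incoming wave from the left: coefficients (1, r) map to (t, 0) on the right.
    r = -T[1, 0] / T[1, 1]
    t = T[0, 0] + T[0, 1] * r
    return float(abs(t) ** 2 * (k[-1].real / k[0].real))

# Toy usage: a double-barrier structure (a well enclosed by two barriers of height 5).
V = [0.0, 5.0, 0.0, 5.0, 0.0]
x = [0.0, 0.5, 1.5, 2.0]
print(transmission(2.0, V, x))
```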

    Exploring Relationships Among Belief in Genetic Determinism, Genetics Knowledge, and Social Factors

    Get PDF

    Implicit Regularization of Penalized Low-Rank Factorizations

    No full text
    Scale invariance is a well-known property of matrix and tensor factorization models. It is usually regarded only as a source of ambiguity when inferring the parameters of these models. However, when regularizations that are not scale-invariant are added to the cost function, typically when sparsity is imposed, scale invariance induces an implicit regularization that balances the estimated factors. This behavior has been partially documented formally, but has not been accounted for in practice. In this work, I further discuss this implicit regularization and show empirically how to adapt existing algorithms so that the estimation of penalized tensor decompositions becomes more robust to the choice of hyperparameters and more precise.
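    As a short worked equation illustrating the balancing effect (an illustration, not taken from the report): for a rank-one term a bᵀ with ℓ1 penalties on both factors, optimizing the penalty over the free scaling equalizes the two penalty terms.

```latex
% Scale invariance: a b^\top = (\lambda a)(b/\lambda)^\top for any \lambda > 0.
% Minimizing the penalty over this free scaling balances the two factors:
\min_{\lambda > 0}\; \lambda \|a\|_1 + \frac{1}{\lambda}\|b\|_1
  \;=\; 2\sqrt{\|a\|_1 \, \|b\|_1},
\qquad \text{attained at } \lambda^\star = \sqrt{\|b\|_1 / \|a\|_1}.
% At the optimum \lambda^\star\|a\|_1 = \|b\|_1/\lambda^\star: the penalized
% factors are balanced, and the effective penalty acts on the product a b^\top.
```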

    Environmental Multiway Data Mining

    No full text
    Among commonly used data mining techniques, few are able to take advantage of the multiway structure of data stored as a multiway array. In contrast, tensor decomposition techniques specifically look for the processes underlying the data, each of which can be used to describe all the ways of the data array. The work reported in this manuscript aims at improving the interpretability of the canonical polyadic tensor decomposition, which is by definition a blind model that makes no use of the physical problem underlying the data, by incorporating various kinds of external knowledge into the decomposition model. The first two chapters of this manuscript introduce tensor decomposition techniques from a mathematical and an applicative point of view, respectively. In the third chapter, the many faces of constrained decompositions are explored through a unifying formalism, covering decomposition algorithms, tensor compression, and dictionary-based tensor decomposition. The fourth and last chapter discusses the modeling of intra-subject and inter-subject variability within a constrained decomposition model, when multiple data arrays stemming from one or several similar subjects are available. State-of-the-art techniques are first presented as particular cases of a more general flexible coupling model introduced afterwards. The chapter ends with a discussion on dimensionality reduction and some open problems in the context of subject variability modeling.
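    As a generic illustration of the kind of flexible coupling discussed in the last chapter (not the thesis's exact formulation), each subject's factor can be softly tied to a shared factor rather than constrained to equal it.

```latex
% A generic flexibly coupled CP model: each subject k has its own factor A_k,
% softly tied to a shared factor A (illustrative notation).
\min_{\{A_k, B_k, C_k\},\, A}\;
  \sum_{k} \big\| \mathcal{X}_k - [\![ A_k, B_k, C_k ]\!] \big\|_F^2
  \;+\; \mu \sum_{k} \| A_k - A \|_F^2 .
% \mu \to \infty recovers an exact (hard) coupling A_k = A for all subjects,
% while a finite \mu allows inter-subject variability around the shared factor.
```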

    Semi-Supervised Convolutive NMF for Automatic Piano Transcription

    No full text
    Published at the 2022 Sound and Music Computing (SMC) conference; 7 pages, 5 figures, 3 tables; code available at https://github.com/cohenjer/TransSSCNMF
    Automatic Music Transcription, which consists in transforming an audio recording of a musical performance into a symbolic format, remains a difficult Music Information Retrieval task. In this work, which focuses on piano transcription, we propose a semi-supervised approach using low-rank matrix factorization techniques, in particular Convolutive Nonnegative Matrix Factorization (CNMF). In the semi-supervised setting, only a single recording of each individual note is required. We show on the MAPS dataset that the proposed semi-supervised CNMF method performs better than state-of-the-art low-rank factorization techniques and slightly worse than state-of-the-art supervised deep learning methods, while suffering from generalization issues.
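    As a minimal sketch of the convolutive NMF model the method builds on (the semi-supervised transcription pipeline itself is in the linked repository), the reconstruction sums time-shifted activations filtered by per-offset templates; shapes and names are illustrative.

```python
import numpy as np

def cnmf_reconstruct(W, H):
    """Approximate a spectrogram with the convolutive NMF model
    X_hat = sum_t W[t] @ shift_t(H), where shift_t moves H right by t frames.

    W has shape (T, n_freq, n_notes): one spectral template per note and per
    time offset t (capturing attack and decay); H has shape (n_notes, n_frames).
    """
    T, n_freq, _ = W.shape
    n_frames = H.shape[1]
    X_hat = np.zeros((n_freq, n_frames))
    for t in range(T):
        shifted = np.zeros_like(H)
        if t < n_frames:
            shifted[:, t:] = H[:, :n_frames - t]   # shift activations right by t
        X_hat += W[t] @ shifted
    return X_hat

# Toy usage: 2 notes, templates spanning T=4 frames, a 100-frame activation matrix.
rng = np.random.default_rng(0)
W = rng.random((4, 257, 2))
H = np.zeros((2, 100))
H[0, 10] = 1.0                                    # note 0 played at frame 10
H[1, 40] = 1.0                                    # note 1 played at frame 40
print(cnmf_reconstruct(W, H).shape)               # (257, 100)
```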
